1. Self-Supervised Curriculum Learning for Spelling Error Correction
Abstract:
Anthology paper link: https://aclanthology.org/2021.emnlp-main.281/
Spelling Error Correction (SEC), which requires high-level language understanding, is a challenging but useful task. Current SEC approaches normally follow a pre-training-then-fine-tuning procedure that treats all training data equally. By contrast, Curriculum Learning (CL) uses training data differently over the course of training and has proven effective at improving both performance and training efficiency in many other NLP tasks. In NMT, a model's performance has been shown to be sensitive to the difficulty of its training examples, and CL has been shown to address this effectively. In SEC, data from different language learners are naturally distributed across difficulty levels (some errors made by beginners are easy to correct, while some made by fluent speakers are hard), so we expect that designing a corresponding curriculum for model learning may likewise aid training and yield better performance. In this paper, we study how to ...
Keywords:
Computational Linguistics; Machine Learning; Machine Learning and Data Mining; Natural Language Processing
URL: https://underline.io/lecture/37651-self-supervised-curriculum-learning-for-spelling-error-correction
DOI: https://dx.doi.org/10.48448/dhpq-4149
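
The abstract above describes the general curriculum-learning idea of ordering training examples from easy to hard. As a rough illustration only, and not the paper's actual self-supervised method, the following Python sketch ranks correction pairs by Levenshtein distance as a stand-in difficulty proxy and splits them into easy-to-hard training stages. All names (CorrectionExample, build_curriculum) and the difficulty measure are assumptions made for this example.

```python
# A minimal sketch of the easy-to-hard curriculum idea described in the
# abstract, NOT the paper's actual algorithm. The difficulty proxy here
# (edit distance between the misspelled and corrected sentence) and all
# names are hypothetical stand-ins.

from dataclasses import dataclass
from typing import List

@dataclass
class CorrectionExample:
    noisy: str      # learner-written sentence with spelling errors
    corrected: str  # gold corrected sentence

def edit_distance(a: str, b: str) -> int:
    """Standard Levenshtein distance as a crude difficulty proxy:
    beginner errors often differ from the correction by few edits."""
    m, n = len(a), len(b)
    prev = list(range(n + 1))
    for i in range(1, m + 1):
        cur = [i] + [0] * n
        for j in range(1, n + 1):
            cur[j] = min(prev[j] + 1,          # deletion
                         cur[j - 1] + 1,       # insertion
                         prev[j - 1] + (a[i - 1] != b[j - 1]))  # substitution
        prev = cur
    return prev[n]

def build_curriculum(data: List[CorrectionExample], n_stages: int = 3):
    """Sort examples by difficulty and split into easy-to-hard stages."""
    ranked = sorted(data, key=lambda ex: edit_distance(ex.noisy, ex.corrected))
    stage_size = max(1, len(ranked) // n_stages)
    return [ranked[i:i + stage_size] for i in range(0, len(ranked), stage_size)]

if __name__ == "__main__":
    toy = [
        CorrectionExample("I hvae a cat", "I have a cat"),
        CorrectionExample("Ther dog runing fastt", "Their dog is running fast"),
        CorrectionExample("Helo", "Hello"),
    ]
    for stage, batch in enumerate(build_curriculum(toy), 1):
        # In real training, the SEC model would be fine-tuned on each
        # stage in order, from easiest to hardest.
        print(f"stage {stage}: {[ex.noisy for ex in batch]}")
```

In actual training, the model would be fine-tuned stage by stage, and a self-supervised difficulty signal (for example, one derived from model confidence) would presumably replace raw edit distance.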
2. Multi-Head Highly Parallelized LSTM Decoder for Neural Machine Translation
3. Modeling Task-Aware MIMO Cardinality for Efficient Multilingual Neural Machine Translation
4. Transformer-based NMT: modeling, training and implementation
Xu, Hongfei. Saarländische Universitäts- und Landesbibliothek, 2021.
5. Probing Word Translations in the Transformer and Trading Decoder for Encoder Layers